Search Results: "dan"

4 March 2024

Paulo Henrique de Lima Santana: Bits from FOSDEM 2023 and 2024

Link to the Portuguese version

Intro Since 2019, I have traveled to Brussels at the beginning of the year to join FOSDEM, considered the largest and most important Free Software event in Europe. The 2024 edition was the fourth in-person edition in a row that I joined (2021 and 2022 did not happen due to COVID-19), and always with the financial help of Debian, which kindly paid my flight tickets after I sent a request asking for help to travel and the Debian leader approved it. In 2020 I wrote several posts with a very complete report of the days I spent in Brussels. But in 2023 I didn't write anything, and because last year and this year I coordinated a room dedicated to translations of Free Software and Open Source projects, I'm going to take the opportunity to write about these two years and what my experience was like. After my first trip to FOSDEM, I started to think that I could join in a more active way than just as a regular attendee, so I had the desire to propose a talk for one of the rooms. But then I thought that instead of proposing a talk, I could organize a room for talks :-) and on the topic of translations, which is something I'm very interested in, because I have been helping to translate Debian into Portuguese for a few years now.

Joining FOSDEM 2023 In the second half of 2022 I did some research and saw that there had never been a room dedicated to translations, so when the FOSDEM organization opened the call to receive room proposals (called DevRooms) for the 2023 edition, I sent a proposal for a translation room and it was accepted! After the room was confirmed, the next step was for me, as room coordinator, to publicize the call for talk proposals. I spent a few weeks hoping to find out whether I would receive a good number of proposals or whether it would be a failure. But to my happiness, I received eight proposals, and due to time constraints I had to select six to fill the room's schedule. FOSDEM 2023 took place from February 4th to 5th and the translation devroom was scheduled for the second day, in the afternoon. The talks held in the room were the ones below, and for each of them you can watch the video recording. On the first day of FOSDEM I was at the Debian stand selling the t-shirts that I had brought from Brazil. People from France were also there selling other products and it was cool to interact with people who visited the booth to buy and/or talk about Debian.
Fosdem 2023

Fosdem 2023
Photos

Joining FOSDEM 2024 The 2023 result motivated me to propose the translation devroom again when the FOSDEM 2024 organization opened the call for rooms. I was waiting to find out whether the FOSDEM organization would accept a room on this topic for the second year in a row and, to my delight, my proposal was accepted again :-) This time I received 11 proposals! And again due to time constraints, I had to select six to fill the room's schedule grid. FOSDEM 2024 took place from February 3rd to 4th and the translation devroom was scheduled for the second day again, but this time in the morning. The talks held in the room were the ones below, and for each of them you can watch the video recording. This time I didn't help at the Debian stand because I couldn't bring t-shirts from Brazil to sell. So I just stopped by and talked to some people who were there, including some DDs. But I volunteered for a few hours to operate the streaming camera in one of the main rooms.
Fosdem 2024

Fosdem 2024
Photos

Conclusion The topics of the talks in these two years were quite diverse, and all the lectures were really very good. Across the 12 talks we could see how translations happen in projects such as KDE, PostgreSQL, Debian and Mattermost. We had presentations of tools such as LibreTranslate and Weblate, as well as scripts, AI, and data models. And also reports on the work carried out by communities in Africa, China and Indonesia. The rooms were full for some talks and a little emptier for others, but I was very satisfied with the final result of these two years. I leave my special thanks to Jonathan Carter, the Debian Leader, who approved my flight ticket requests so that I could join FOSDEM 2023 and 2024. This help was essential to make my trip to Brussels possible, because flight tickets are not cheap at all. I would also like to thank my wife Jandira, who has been my travel partner :-) As there has been an increase in the number of proposals received, I believe that interest in the translations devroom is growing. So I intend to send the devroom proposal to FOSDEM 2025 again, and if it is accepted, wait for the future Debian Leader to approve helping me with the flight tickets again. We'll see.

2 March 2024

Ravi Dwivedi: Malaysia Trip

Last month, I had a trip to Malaysia and Thailand. I stayed for six days in each of the countries. The selection of these countries was due to both of them granting visa-free entry to Indian tourists for some time window. This post covers the Malaysia part, and the Thailand part will be covered in the next post. If you want to travel to either of these countries in the visa-free time period, I have written up all the questions asked during immigration and at airports during this trip here, which might be of help. I mostly stayed in Kuala Lumpur and went to places around it. Before the trip I had planned to visit Ipoh and the Cameron Highlands too, but I could not cover them during the trip. I found planning a trip to Malaysia a little difficult. The country is divided into two main parts - Peninsular Malaysia and Borneo. Then there are more islands - Langkawi, Penang Island, and the Perhentian and Redang Islands. Reaching those islands seemed a little difficult to plan, and I wish to visit more places on my next Malaysia trip. My hostel for the first day was booked in the Chinatown part of Kuala Lumpur, near the Pasar Seni LRT station. As soon as I checked in and entered my room, I met another Indian named Fletcher, and after that we accompanied each other on the trip. That day, we went to Muzium Negara and Little India. I realized that if you know the right places to buy what you want, Malaysia can be quite cheap. The Malaysian currency is the Malaysian Ringgit (MYR). 1 MYR is equal to 18 INR. For 2 MYR, you can get a good masala tea in Little India, and a masala dosa costs around 4-5 MYR. Vegetarian food is widely available in Kuala Lumpur, thanks to the Tamil community. I also tried Mee Goreng, which was vegetarian, and I found it fine in terms of taste. When I read about Mee Goreng on Wikipedia, I found out that it is unique to Indian immigrants in Malaysia (and neighboring countries) but you don't get it in India!
Mee Goreng, a dish made of noodles in Malaysia.
For the next day, Fletcher had planned a trip to Genting Highlands and pre-booked everything. I also planned to join him, but when we went to KL Sentral to take the bus, his bus tickets were sold out. I could have taken a bus at a different time, but decided to visit some other place for the day and cover Genting Highlands later. At the ticket counter, I met a family from Delhi who also wanted to go to Genting Highlands, but since they could not get bus tickets for that day, they decided to buy tickets for the next day and instead planned for Batu Caves that day. I joined them and went to Batu Caves. After returning from Batu Caves, we went our separate ways. I went back and took rest at my hostel and later went to the Petronas Towers at night. The Petronas Towers are the icon of Kuala Lumpur. Having a photo there was a must. I was at the Petronas Towers at around 9 PM. Around that time, Fletcher came back from Genting Highlands and we planned to meet at KL Sentral to head for dinner.
Me at Petronas Towers.
We went back to the same place as the day before, where I had had Mee Goreng. This time we had dosa and a masala tea. Their masala tea from the previous day was tasty and that's why I was looking for them in the first place. We also met a Malaysian family of Indian ancestry dining there and had a nice conversation. Then we went to a place to eat roti canai in the Pasar Seni market. Roti canai is a popular non-vegetarian dish in Malaysia but I took the vegetarian version.
Photo with Malaysians.
The next day, we went to the Berjaya Times Square shopping place, which sells pretty cheap items for daily use and souvenirs too. However, I bought souvenirs from Petaling Street, which is in Chinatown. At night, we explored Bukit Bintang, which is the heart of Kuala Lumpur and is famous for its nightlife. After that, Fletcher went to Bangkok and I was in Malaysia for two more days. The next day, I went to Genting Highlands and took the cable car, which had awesome views. I came back to Kuala Lumpur by night. The remaining day I just roamed around in Bukit Bintang. Then I took a flight to Bangkok on 7th Feb, which I will cover in the next post. In Malaysia, I met so many people from different countries - apart from people from the Indian subcontinent, I met Syrians, Indonesians (Malaysia seems to be a popular destination for Indonesian tourists) and Burmese people. Meeting people from other cultures is an integral part of travel for me. My expenses for Food + Accommodation + Travel added up to 10,000 INR for a week in Malaysia, while flight costs were: 13,000 INR (Delhi to Kuala Lumpur) + 10,000 INR (Kuala Lumpur to Bangkok) + 12,000 INR (Bangkok to Delhi). For OpenStreetMap users, the good news is that Kuala Lumpur is fairly well-mapped on OpenStreetMap.

Tips
  • I bought a local SIM from a shop at the KL Sentral station complex which had "news" in its name (I forgot the exact name, and there are two shops with "news" in their names) and it was the cheapest option I could find. The SIM was 10 MYR for 5 GB of data for a week. If you want to make calls too, then you need to spend an extra 5 MYR.
  • 7-Eleven and KK Mart convenience stores are everywhere in the city and they are open all the time (24 hours a day). If you are a vegetarian, you can at least get some bread and cheese from there to eat.
  • A lot of people know English (and many - Indians, Pakistanis, Nepalis - know Hindi) in Kuala Lumpur, so I had no language problems most of the time.
  • For shopping on a budget, you can go to Petaling Street, Berjaya Times Square or Bukit Bintang. In particular, there is a shop named I Love KL Gifts in Bukit Bintang which had very good prices, just near the metro/monorail station. Check out the location of the shop on OpenStreetMap.

1 March 2024

Junichi Uekawa: March.

March. Busy days.

28 February 2024

Daniel Lange: Opencollective shutting down

Update 28.02.2024 19:45 CET: There is now a blog entry at https://blog.opencollective.com/open-collective-official-statement-ocf-dissolution/ trying to discern the legal entities in the Open Collective ecosystem and recommending potential ways forward.
Gee, there is nothing on their blog yet, but I just [28.02.2024 00:07 CET] received this email from Mike Strode, Program Officer at the Open Collective Foundation:

Dear Daniel Lange, It is with a heavy heart that I'm writing today to inform you that the Board of Directors of the Open Collective Foundation (OCF) has made the difficult decision to dissolve OCF, effective December 31, 2024. We are proud of the work we have been able to do together. We have been honored to build community with you and the hundreds of other collectives hosted at the Open Collective Foundation.

What you need to know: We are beginning a staged dissolution process that will allow our over 600 collectives the time to close or transition their work. Dissolving OCF will take many months, and involves settling all liabilities while spending down all funds in a legally compliant manner. Our priority is to support our collectives in navigating this change. We want to provide collectives the longest possible runway to wind down or transition their operations while we focus on the many legal and financial tasks associated with dissolving a nonprofit. March 15 is the last day to accept donations. You will have until September 30 to work with us to develop and implement a plan to spend down the money in your fund. Key dates are included at the bottom of this email. We know this is going to be difficult, and we will do everything we can to ease the transition for you.

How we will support collectives: It remains our fiduciary responsibility to safeguard each collective's charitable assets and ensure funds are used solely for specified charitable purposes. We will be providing assistance and support to you, whether you choose to spend out and close down your collective or continue your work through another 501(c)(3) organization or fiscal sponsor. Unfortunately, we had to say goodbye to several of our colleagues today as we pare down our core staff to reduce costs. I will be staying on staff to support collectives through this transition, along with Wayne Kleppe, our Finance Administrator.

What led to this decision: From day one, OCF was committed to experimentation and innovation. We were dedicated to finding new ways to open up the nonprofit space, making it easier for people to raise and access funding so they can do good in their communities. OCF was created by Open Collective Inc. (OCI), a company formed in 2015 with the goal of "enabling groups to quickly set up a collective, raise funds and manage them transparently." Soon after being founded by OCI, OCF went through a period of rapid growth. We responded to increased demand arising from the COVID-19 pandemic without taking the time to establish the appropriate systems and infrastructure to sustain that growth. Unfortunately, over the past year, we have learned that Open Collective Foundation's business model is not sustainable with the number of complex services we have offered and the fees we pay to the Open Collective Inc. tech platform. In late 2023, we made the decision to pause accepting new collectives in order to create space for us to address the issues. Unfortunately, it became clear that it would not be financially feasible to make the necessary corrections, and we determined that OCF is not viable.

What's next: We know this news will raise questions for many of our collectives. We will be making space for questions and reactions in the coming weeks. In the meantime, we have developed this FAQ which we will keep updated as more questions come in.
What you need to do next: Dates to know: In Care & Accompaniment,
Mike Strode
Program Officer
Open Collective Foundation
Our mailing address has changed! We are now located at 440 N. Barranca Avenue #3717, Covina, CA 91723, USA

12 February 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, January 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In January, 16 contributors were paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 14.0h (out of 7.0h assigned and 7.0h from previous period).
  • Bastien Roucariès did 22.0h (out of 16.0h assigned and 6.0h from previous period).
  • Ben Hutchings did 14.5h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 9.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 10.0h (out of 14.75h assigned and 27.0h from previous period), thus carrying over 31.75h to the next month.
  • Guilhem Moulin did 9.75h (out of 25.0h assigned), thus carrying over 15.25h to the next month.
  • Holger Levsen did 3.5h (out of 12.0h assigned), thus carrying over 8.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 8.75h (out of 9.5h assigned and 2.5h from previous period), thus carrying over 3.25h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 8.25h assigned and 7.75h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 0.5h (out of 0.25h assigned and 5.75h from previous period), thus carrying over 5.5h to the next month.
  • Sylvain Beucler did 9.5h (out of 23.25h assigned and 18.5h from previous period), thus carrying over 32.25h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 10.25h assigned and 1.75h from previous period).
  • Utkarsh Gupta did 8.5h (out of 35.75h assigned), thus carrying over 24.75h to the next month.

Evolution of the situation In January, we released 25 DLAs. A variety of particularly notable packages were updated during January. Among those updates were the Linux kernel (both versions 5.10 and 4.19), mariadb-10.3, openjdk-11, firefox-esr, and thunderbird. In addition to the many other LTS package updates which were released in January, LTS contributors continue their efforts to make impactful contributions both within and beyond the Debian community.

Thanks to our sponsors Sponsors that joined recently are in bold.

11 February 2024

Junichi Uekawa: Adding a new date entry using GAS on Google Docs.

Adding a new date entry using GAS on Google Docs. When I am writing a diary, or having a weekly meeting, it is sometimes tedious to add today's date every time. I can automate such tasks with GAS. There's an onOpen method that gets called every time a doc is opened, and you can use that to add today's date when the most recent date entry does not match.
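As a rough illustration of that idea (a minimal sketch only, not Junichi's actual script; the use of HEADING2 for date entries and the yyyy-MM-dd date format are assumptions), an onOpen handler in a Google Apps Script bound to the document could look like this:

  // Simple trigger: runs automatically every time the bound document is opened.
  function onOpen() {
    const body = DocumentApp.getActiveDocument().getBody();
    const today = Utilities.formatDate(
        new Date(), Session.getScriptTimeZone(), 'yyyy-MM-dd');

    // Find the text of the most recent date heading (assumed to be HEADING2).
    let lastDateHeading = null;
    for (const p of body.getParagraphs()) {
      if (p.getHeading() === DocumentApp.ParagraphHeading.HEADING2) {
        lastDateHeading = p.getText();
      }
    }

    // Append today's date only when the latest date entry does not match.
    if (lastDateHeading !== today) {
      body.appendParagraph(today)
          .setHeading(DocumentApp.ParagraphHeading.HEADING2);
    }
  }

Because this is a simple trigger in a bound script, it can modify the document it is attached to without extra authorization, which is enough for the append-a-date-when-missing behaviour described above.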

9 February 2024

Reproducible Builds (diffoscope): diffoscope 256 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 256. This version includes the following changes:
* CVE-2024-25711: Use a deterministic name when extracting content from GPG
  artifacts instead of trusting the value of gpg's --use-embedded-filenames.
  This prevents a potential information disclosure vulnerability that could
  have been exploited by providing a specially-crafted GPG file with an
  embedded filename of, say, "../../.ssh/id_rsa".
  Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this
  issue and providing feedback.
  (Closes: reproducible-builds/diffoscope#361)
* Temporarily fix support for Python 3.11.8 re. a potential regression
  with the handling of ZIP files. (See reproducible-builds/diffoscope#362)
You can find out more by visiting the project homepage.

2 February 2024

Junichi Uekawa: February already.

February already. I was planning on doing some Debian Rust stuff but then I need to reconstruct my environment.

Scarlett Gately Moore: Some exciting news! Kubuntu: I'm back!!!

It's official: the Kubuntu Council has hired me part time to work on the 24.04 LTS release, preparation for Plasma 6, and to bring life back into the distribution. First I want to thank the Kubuntu Council for this opportunity and I plan a long and successful journey together!!!! My first week (I started midweek): it has been a busy one! Many meet and greets with the team and other interested parties. I had the chance to chat with Mike from Kubuntu Focus and I have to say I am absolutely amazed with the work they have done, and if you are in the market for a new laptop, you must check these out!!! https://kfocus.org Or if you want to try before you buy, you can download the OS! All they ask for is an e-mail, which is completely reasonable. Hosting isn't free! Besides, you can opt out anytime and they don't share it with anyone. I look forward to working closely with this project. We now have a Kubuntu Team in KDE Invent https://invent.kde.org/teams/distribution-kubuntu ; if you would like to join us, please don't hesitate to ask! I have started a new wiki and our first page is the ever important bug triaging! It is still a WIP but you can check it out here: https://invent.kde.org/teams/distribution-kubuntu/docs/-/wikis/Bug-Triage-Story-WIP . With that said, I have started the Launchpad work to make tracking our bugs easier by subscribing kubuntu-bugs to all our packages and creating proper projects for our packages missing them. We have compiled a list of our various documentation links that need to be updated, and Rick Timmis is updating kubuntu.org! Aaron Honeycutt has been busy with the Kubuntu Manual https://github.com/kubuntu-team/kubuntu-manual which is in good shape. We just need to improve our developer story. I have been working on the rather massive AppArmor bug https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2046844 , testing the fixes from the PPA and writing profiles for the various KDE packages affected (pretty much anything that uses webengine), and making progress there. My next order of business is staging Frameworks 5.114 with guidance from our super awesome Rik Mills, who has been doing most of the heavy lifting in Kubuntu for many years now. So thank you for that, Rik! I will also start on our big transition to the Calamares installer! I do have experience here, so I expect it will be a smooth one. I am so excited for the future of Kubuntu and the exciting things to come! With that said, Kubuntu funding is community donation driven. There is enough to pay me part time for a couple of contracts, but it will run out, and a full-time contract would be super awesome. I am reaching out to anyone enjoying Kubuntu who wants to help with the future of Kubuntu: please consider a donation! We are working on more donation options, but for now you can donate through PayPal at https://kubuntu.org/donate/ Thank you!!!!!

31 January 2024

Valhalla's Things: Macrame Bookbag

Posted on January 31, 2024
Tags: madeof:atoms, craft:macrame
a macrame bag in ~3 mm ecru yarn, with very irregular knots of different types, holding a book with a blue cover. The bottom part has a rigid single layer triangle and a fringe.
In late 2022 I prepared a batch of drawstring backpacks in cotton as reusable wrappers for Christmas gifts; however I didn't know what cord to use, didn't want to use paracord, and couldn't find anything that looked right in the local shops. With Christmas getting dangerously closer, I visited a craft materials website for unrelated reasons, found out that they sold macrame cords, and panic-bought a few types in the hope that at least one would work for the backpacks. I got lucky, and my first choice fitted just fine, and I was able to finish the backpacks in time for the holidays. And then I had a box full of macrame cords in various sizes and types that weren't the best match for the drawstring in a backpack, and no real use for them. I don't think I had ever done macrame, but I have made friendship bracelets in primary school, and a few Friendship Bracelets, But For Real Men So We Call Them Survival Bracelets(TM) more recently, so I didn't bother reading instructions or tutorials online, I just grabbed the Ashley Book of Knots to refresh myself on the knots used, and decided to make myself a small bag for an A6 book. I chose one of the thin, ~3 mm cords, Tre Sfere Macramé Barbante, of which there was plenty, so that I could stumble around with no real plan.
A loop of four cords, with a handle made of square knots that keeps it together.
I started by looping 5 m of cord, making iirc 2 rounds of a loop about the right size to go around the book with a bit of ease, then used the ends as filler cords for a handle, wrapped them around the loop and worked square knots all over them to make a handle. Then I cut the rest of the cord into 40 pieces, each 4 m long, because I had no idea how much I was going to need (spoiler: I successfully got it wrong :D ). I joined the cords to the handle with lark's head knots, 20 per side, and then I started knotting without a plan or anything, alternating between hitches and square knots, sometimes close together and sometimes leaving some free cord between them. And apparently I also completely forgot to take in-progress pictures. I kept working on this for a few months, knotting a row or two now and then, until the bag was long enough for the book, then I closed the bottom by taking one cord from the front and the corresponding one on the back and knotting them together (I don't remember how), and finally I made a rigid triangle of tight square knots with all of the cords, progressively leaving out a cord from each side, and cutting it into a fringe. I then measured the remaining cords, and saw that the shortest ones were about a meter long, but the longest ones were up to 3 meters; I could have cut them much shorter at the beginning (and maybe added a couple more cords). The leftovers will be used, in some way. And then I postponed taking pictures of the finished object for a few months.
The same bag, empty and showing how the sides aren't straight.
Now the result is functional, but I have to admit it is somewhat ugly: not as much for the lack of a pattern (that I think came out quite fine) but because of how irregular the knots are; I'm not confident that the next time I will be happy with their regularity, either, but I hope I will improve, and that's one important thing.
And the other important thing is: I enjoyed making this, even if I kept interrupting the work, and I think that there may be some other macrame in my future.

29 January 2024

Russ Allbery: Review: Bluebird

Review: Bluebird, by Ciel Pierlot
Publisher: Angry Robot
Copyright: 2022
ISBN: 0-85766-967-2
Format: Kindle
Pages: 458
Bluebird is a stand-alone far-future science fiction adventure. Ten thousand years ago, a star fell into the galaxy carrying three factions of humanity. The Ascetics, the Ossuary, and the Pyrites each believe that only their god survived and the other two factions are heretics. Between them, they have conquered the rest of the galaxy and its non-human species. The only thing the factions hate worse than each other is those who attempt to stay outside the faction system. Rig used to be a Pyrite weapon designer before she set fire to her office and escaped with her greatest invention. Now she's a Nightbird, a member of an outlaw band that tries to help refugees and protect her fellow Kashrini against Pyrite genocide. On her side, she has her girlfriend, an Ascetic librarian; her ship, Bluebird; and her guns, Panache and Pizzazz. And now, perhaps, the mysterious Ginka, a Zazra empath and remarkably capable fighter who helps Rig escape from an ambush by Pyrite soldiers. Rig wants to stay alive, help her people, and defy the factions. Pyrite wants Rig's secrets and, as leverage, has her sister. What Ginka wants is not entirely clear even to Ginka. This book is absurd, but I still had fun with it. It's dangerous for me to compare things to anime given how little anime I've watched, but Bluebird had that vibe for me: anime, or maybe Japanese RPGs or superhero comics. The storytelling is very visual, combat-oriented, and not particularly realistic. Rig is a pistol sharpshooter and Ginka is the type of undefined deadly acrobatic fighter so often seen in that type of media. In addition to her ship, Rig has a gorgeous hand-maintained racing hoverbike with a beautiful paint job. It's that sort of book. It's also the sort of book where the characters obey cinematic logic designed to maximize dramatic physical confrontations, even if their actions make no logical sense. There is no facial recognition or screening, and it's bizarrely easy for the protagonists to end up in the same physical location as high-up bad guys. One of the weapon systems that's critical to the plot makes no sense whatsoever. At critical moments, the bad guys behave more like final bosses in a video game, picking up weapons to deal with the protagonists directly instead of using their supposedly vast armies of agents. There is supposedly a whole galaxy full of civilizations with capital worlds covered in planet-spanning cities, but politics barely exist and the faction leaders get directly involved in the plot. If you are looking for a realistic projection of technology or society, I cannot stress enough that this is not the book that you're looking for. You probably figured that out when I mentioned ten thousand years of war, but that will only be the beginning of the suspension of disbelief problems. You need to turn off your brain and enjoy the action sequences and melodrama. I'm normally good at that, and I admit I still struggled because the plot logic is such a mismatch with the typical novels I read. There are several points where the characters do something that seems so monumentally dumb that I was sure Pierlot was setting them up for a fall, and then I got wrong-footed because their plan worked fine, or exploded for unrelated reasons. I think this type of story, heavy on dramatic eye-candy and emotional moments with swelling soundtracks, is a lot easier to pull off in visual media where all the pretty pictures distract your brain.
In a novel, there's a lot of time to think about the strategy, technology, and government structure, which for this book is not a good idea. If you can get past that, though, Rig is entertainingly snarky and Ginka, who turns out to be the emotional heart of the book, is an enjoyable character with a real growth arc. Her background is a bit simplistic and the villains are the sort of pure evil that you might expect from this type of cinematic plot, but I cared about the outcome of her story. Some parts of the plot dragged and I think the editing could have been tighter, but there was enough competence porn and banter to pull me through. I would recommend Bluebird only cautiously, since you're going to need to turn off large portions of your brain and be in the right mood for nonsensically dramatic confrontations, but I don't regret reading it. It's mostly in primary colors and the emotional conflicts are not what anyone would call subtle, but it delivers a character arc and a somewhat satisfying ending. Content warning: There is a lot of serious physical injury in this book, including surgical maiming. If that's going to bother you, you may want to give this one a pass. Rating: 6 out of 10

25 January 2024

Joachim Breitner: GHC Steering Committee Retrospective

After seven years of service as member and secretary on the GHC Steering Committee, I have resigned from that role. So this is a good time to look back and retrace the formation of the GHC proposal process and committee. In my memory, I helped define and shape the proposal process, optimizing it for effectiveness and throughput, but memory can be misleading, and judging from the paper trail in my email archives, this was indeed mostly Ben Gamari's and Richard Eisenberg's achievement: Already in Summer of 2016, Ben Gamari set up the ghc-proposals Github repository with a sketch of a process and sent out a call for nominations on the GHC user's mailing list, which I replied to. The Simons picked the first set of members, and in the fall of 2016 we discussed the committee's by-laws and procedures. As so often, Richard was an influential shaping force here.

Three ingredients For example, it was he who suggested that for each proposal we have one committee member be the Shepherd, overseeing the discussion. I believe this was one ingredient for the process effectiveness: There is always one person in charge, and thus we avoid the delays incurred when any one of a non-singleton set of volunteers has to do the next step (and everyone hopes someone else does it). The next ingredient was that we do not usually require a vote among all members (again, not easy with volunteers with limited bandwidth and occasional phases of absence). Instead, the shepherd makes a recommendation (accept/reject), and if the other committee members do not complain, this silence is taken as consent, and we come to a decision. It seems this idea can also be traced back to Richard, who suggested that once a decision is requested, the shepherd [generates] consensus. If consensus is elusive, then we vote. At the end of the year we agreed and wrote down these rules, created the mailing list for our internal, but publicly archived committee discussions, and began accepting proposals, starting with Adam Gundry's OverloadedRecordFields. At that point, there was no secretary role yet, so how did I become one? It seems that in February 2017 I started to clean up and refine the process documentation, fixing bugs in the process (like requiring authors to set Github labels when they don't even have permissions to do that). This in particular meant that someone from the committee had to manually handle submissions and so on, and by the aforementioned principle that at every step there ought to be exactly one person in charge, the role of a secretary followed naturally. In the email in which I described that role I wrote:
Simon already shoved me towards picking up the secretary hat, to reduce load on Ben.
So when I merged the updated process documentation, I already listed myself as secretary. It wasn't just Simon's shoving that put me into the role, though. I dug out my original self-nomination email to Ben, and among other things I wrote:
I also hope that there is going to be clear responsibilities and a clear workflow among the committee. E.g. someone (possibly rotating), maybe called the secretary, who is in charge of having an initial look at proposals and then assigning it to a member who shepherds the proposal.
So it is hardly a surprise that I became secretary, when it was dear to my heart to have a smooth continuous process here. I am rather content with the result: These three ingredients (single secretary, per-proposal shepherds, silence-is-consent) helped the committee to be effective throughout its existence, even as every once in a while individual members dropped out.

Ulterior motivation I must admit, however, there was an ulterior motivation behind me grabbing the secretary role: Yes, I did want the committee to succeed, and I did want authors to receive timely, good and decisive feedback on their proposals, but I did not really want to have to do that part. I am, in fact, a lousy proposal reviewer. I am too generous when reading proposals, and more likely to mentally fill gaps in a specification than to spot them. Always optimistically assuming that the authors surely know what they are doing, rather than critically assessing the impact, the implementation cost and the interaction with other language features. And, maybe more importantly: why should I know which changes are good and which are not so good in the long run? Clearly, the authors cared enough about a proposal to put it forward, so there is some need, and I do believe that Haskell should stay an evolving and innovating language, but how does this help me decide about this or that particular feature? I even, during the formation of the committee, explicitly asked that we write down some guidance on Vision and Guideline: do we want to foster change or innovation, or be selective gatekeepers? Should we accept features that are proven to be useful, or should we accept features so that they can prove to be useful? This discussion, however, did not lead to a concrete result, and the assessment of proposals relied on the sum of each member's personal preference, expertise and gut feeling. I am not saying that this was a mistake: It is hard to come up with a general guideline here, and even harder to find one that does justice to each individual proposal. So the secret motivation for me to grab the secretary post was that I could contribute without having to judge proposals. Being secretary allowed me to assign most proposals to others to shepherd, and only once in a while take care of a proposal myself, when it seemed to be very straight-forward. Sneaky, ain't it?

7 Years later For years to come I happily played secretary: When an author finished their proposal and public discussion ebbed down, they would ping me on GitHub, I would pick a suitable shepherd among the committee and ask them to judge the proposal. Eventually, the committee would come to a conclusion, usually by implicit consent, sometimes by voting, and I'd merge the pull request and update the metadata thereon. Every few months I'd summarize the current state of affairs to the committee (what happened since the last update, which proposals are currently on our plate), and once per year gathered the data for Simon Peyton Jones' annual GHC Status Report. Sometimes some members needed a nudge or two to act. Some would eventually step down, and I'd send around a call for nominations and, when the nominations came in, distribute them off-list among the committee and tally the votes. Initially, that was exciting. For a long while it was a pleasant and rewarding routine. Eventually, it became a mere chore. I noticed that I didn't quite care so much anymore about some of the discussion, and there was a decent amount of navel-gazing, meta-discussions and some wrangling about claims of authority that was probably useful and necessary, but wasn't particularly fun. I also began to notice weaknesses in the processes that I helped shape: We could really use some more automation for showing proposal statuses, notifying people when they have to act, and nudging them when they don't. The whole silence-is-assent approach is good for throughput, but not necessarily great for quality, and maybe the committee members need to be pushed more firmly to engage with each proposal. Like GHC itself, the committee processes deserve continuous refinement and refactoring, and since I could not muster the motivation to change my now well-trod secretarial ways, it was time for me to step down. Luckily, Adam Gundry volunteered to take over, and that makes me feel much less bad for quitting. Thanks for that! And although I am for my day job now enjoying a language that has many of the things out of the box that for Haskell are still only language extensions or even just future proposals (dependent types, BlockArguments, do notation with ( foo) expressions and Unicode), I'm still around, hosting the Haskell Interlude Podcast, writing on this blog and hanging out at ZuriHac etc.

16 January 2024

Russ Allbery: Review: Making Money

Review: Making Money, by Terry Pratchett
Series: Discworld #36
Publisher: Harper
Copyright: October 2007
Printing: November 2014
ISBN: 0-06-233499-9
Format: Mass market
Pages: 473
Making Money is the 36th Discworld novel, the second Moist von Lipwig book, and a direct sequel to Going Postal. You could start the series with Going Postal, but I would not start here. The post office is running like a well-oiled machine, Adora Belle is out of town, and Moist von Lipwig is getting bored. It's the sort of boredom that has him picking his own locks, taking up Extreme Sneezing, and climbing buildings at night. He may not realize it, but he needs something more dangerous to do. Vetinari has just the thing. The Royal Bank of Ankh-Morpork, unlike the post office before Moist got to it, is still working. It is a stolid, boring institution doing stolid, boring things for rich people. It is also the battleground for the Lavish family past-time: suing each other and fighting over money. The Lavishes are old money, the kind of money carefully entangled in trusts and investments designed to ensure the family will always have money regardless of how stupid their children are. Control of the bank is temporarily in the grasp of Joshua Lavish's widow Topsy, who is not a true Lavish, but the vultures are circling. Meanwhile, Vetinari has grand city infrastructure plans, and to carry them out he needs financing. That means he needs a functional bank, and preferably one that is much less conservative. Moist is dubious about running a bank, and even more reluctant when Topsy Lavish sees him for exactly the con artist he is. His hand is forced when she dies, and Moist discovers he has inherited her dog, Mr. Fusspot. A dog that now owns 51% of the Royal Bank and therefore is the chairman of the bank's board of directors. A dog whose safety is tied to Moist's own by way of an expensive assassination contract. Pratchett knew he had a good story with Going Postal, so here he runs the same formula again. And yes, I was happy to read it again. Moist knows very little about banking but quite a lot about pretending something will work until it does, which has more to do with banking than it does with running a post office. The bank employs an expert, Mr. Bent, who is fanatically devoted to the gold standard and the correctness of the books and has very little patience for Moist. There are golem-related hijinks. The best part of this book is Vetinari, who is masterfully manipulating everyone in the story and who gets in some great lines about politics.
"We are not going to have another wretched empire while I am Patrician. We've only just got over the last one."
Also, Vetinari processing dead letters in the post office was an absolute delight. Making Money does have the recurring Pratchett problem of having a fairly thin plot surrounded by random... stuff. Moist's attempts to reform the city currency while staying ahead of the Lavishes is only vaguely related to Mr. Bent's plot arc. The golems are unrelated to the rest of the plot other than providing a convenient deus ex machina. There is an economist making water models in the bank basement with an Igor, which is a great gag but has essentially nothing to do with the rest of the book. One of the golems has been subjected to well-meaning older ladies and 1950s etiquette manuals, which I thought was considerably less funny (and somewhat creepier) than Pratchett did. There are (sigh) clowns, which continue to be my least favorite Ankh-Morpork world-building element. At least the dog was considerably less annoying than I was afraid it was going to be. This grab-bag randomness is a shame, since I think there was room here for a more substantial plot that engaged fully with the high weirdness of finance. Unfortunately, this was a bit like the post office in Going Postal: Pratchett dives into the subject just enough to make a few wry observations and a few funny quips, and then resolves the deeper issues off-camera. Moist tries to invent fiat currency, because of course he does, and Pratchett almost takes on the gold standard, only to veer away at the last minute into vigorous hand-waving. I suspect part of the problem is that I know a little bit too much about finance, so I kept expecting Pratchett to take the humorous social commentary a couple of levels deeper. On a similar note, the villains have great potential that Pratchett undermines by adding too much over-the-top weirdness. I wish Cosmo Lavish had been closer to what he appears to be at the start of the book: a very wealthy and vindictive man (and a reference to Cosimo de Medici) who doesn't have Moist's ability to come up with wildly risky gambits but who knows considerably more than he does about how banking works. Instead, Pratchett gives him a weird obsession that slowly makes him less sinister and more pathetic, which robs the book of a competent antagonist for Moist. The net result is still a fun book, and a solid Discworld entry, but it lacks the core of the best series entries. It felt more like a skit comedy show than a novel, but it's an excellent skit comedy show with the normal assortment of memorable Pratchettisms. Certainly if you've read this far, or even if you've only read Going Postal, you'll want to read Making Money as well. Followed by Unseen Academicals. The next Moist von Lipwig book is Raising Steam. Rating: 8 out of 10

15 January 2024

Russ Allbery: Review: The Library of Broken Worlds

Review: The Library of Broken Worlds, by Alaya Dawn Johnson
Publisher: Scholastic Press
Copyright: June 2023
ISBN: 1-338-29064-9
Format: Kindle
Pages: 446
The Library of Broken Worlds is a young-adult far-future science fantasy. So far as I can tell, it's stand-alone, although more on that later in the review. Freida is the adopted daughter of Nadi, the Head Librarian, and her greatest wish is to become a librarian herself. When the book opens, she's a teenager in highly competitive training. Freida is low-wetware, without the advanced and expensive enhancements of many of the other students competing for rare and prized librarian positions, which she makes up for by being the most audacious. She doesn't need wetware to commune with the library material gods. If one ventures deep into their tunnels and consumes their crystals, direct physical communion is possible. The library tunnels are Freida's second home, in part because that's where she was born. She was created by the Library, and specifically by Iemaja, the youngest of the material gods. Precisely why is a mystery. To Nadi, Freida is her daughter. To Quinn, Nadi's main political rival within the library, Freida is a thing, a piece of the library, a secondary and possibly rogue AI. A disruptive annoyance. The Library of Broken Worlds is the sort of science fiction where figuring out what is going on is an integral part of the reading experience. It opens with a frame story of an unnamed girl (clearly Freida) waking the god Nameren and identifying herself as designed for deicide. She provokes Nameren's curiosity and offers an Arabian Nights bargain: if he wants to hear her story, he has to refrain from killing her for long enough for her to tell it. As one might expect, the main narrative doesn't catch up to the frame story until the very end of the book. The Library is indeed some type of library that librarians can search for knowledge that isn't available from more mundane sources, but Freida's personal experience of it is almost wholly religious and oracular. The library's material gods are identified as AIs, but good luck making sense of the story through a science fiction frame, even with a healthy allowance for sufficiently advanced technology being indistinguishable from magic. The symbolism and tone is entirely fantasy, and late in the book it becomes clear that whatever the material gods are, they're not simple technological AIs in the vein of, say, Banks's Ship Minds. Also, the Library is not solely a repository of knowledge. It is the keeper of an interstellar peace. The Library was founded after the Great War, to prevent a recurrence. It functions as a sort of legal system and grand tribunal in ways that are never fully explained. As you might expect, that peace is based more on stability than fairness. Five of the players in this far future of humanity are the Awilu, the most advanced society and the first to leave Earth (or Tierra as it's called here); the Mah m, who possess the material war god Nameren of the frame story; the Lunars and Martians, who dominate the Sol system; and the surviving Tierrans, residents of a polluted and struggling planet that is ruthlessly exploited by the Lunars. The problem facing Freida and her friends at the start of the book is a petition brought by a young Tierran against Lunar exploitation of his homeland. His name is Joshua, and Freida is more than half in love with him. 
Joshua's legal argument involves interpretation of the freedom node of the treaty that ended the Great War, a node that precedent says gives the Lunars the freedom to exploit Tierra, but which Joshua claims has a still-valid originalist meaning granting Tierrans freedom from exploitation. There is, in short, a lot going on in this book, and "never fully explained" is something of a theme. Freida is telling a story to Nameren and only explains things Nameren may not already know. The reader has to puzzle out the rest from the occasional hint. This is made more difficult by the tendency of the material gods to communicate only in visions or guided hallucinations, full of symbolism that the characters only partly explain to the reader. Nonetheless, this did mostly work, at least for me. I started this book very confused, but by about the midpoint it felt like the background was coming together. I'm still not sure I understand the aurochs, baobab, and cicada symbolism that's so central to the framing story, but it's the pleasant sort of stretchy confusion that gives my brain a good workout. I wish Johnson had explained a few more things plainly, particularly near the end of the book, but my remaining level of confusion was within my tolerances. Unfortunately, the ending did not work for me. The first time I read it, I had no idea what it meant. Lots of baffling, symbolic things happened and then the book just stopped. After re-reading the last 10%, I think all the pieces of an ending and a bit of an explanation are there, but it's absurdly abbreviated. This is another book where the author appears to have been finished with the story before I was. This keeps happening to me, so this probably says something more about me than it says about books, but I want books to have an ending. If the characters have fought and suffered through the plot, I want them to have some space to be happy and to see how their sacrifices play out, with more detail than just a few vague promises. If much of the book has been puzzling out the nature of the world, I would like some concrete confirmation of at least some of my guesswork. And if you're going to end the book on radical transformation, I want to see the results of that transformation. Johnson does an excellent job showing how brutal the peace of the powerful can be, and is willing to light more things on fire over the course of this book than most authors would, but then doesn't offer the reader much in the way of payoff. For once, I wish this stand-alone turned out to be a series. I think an additional book could be written in the aftermath of this ending, and I would definitely read that novel. Johnson has me caring deeply about these characters and fascinated by the world background, and I'd happily spend another 450 pages finding out what happens next. But, frustratingly, I think this ending was indeed intended to wrap up the story. I think this book may fall between a few stools. Science fiction readers who want mysterious future worlds to be explained by the end of the book are going to be frustrated by the amount of symbolism, allusion, and poetic description. Literary fantasy readers, who have a higher tolerance for that style, are going to wish for more focused and polished writing. A lot of the story is firmly YA: trying and failing to fit in, developing one's identity, coming into power, relationship drama, great betrayals and regrets, overcoming trauma and abuse, and unraveling lies that adults tell you. 
But this is definitely not a straight-forward YA plot or world background. It demands a lot from the reader, and while I am confident many teenage readers would rise to that challenge, it seems like an awkward fit for the YA marketing category. About 75% of the way in, I would have told you this book was great and you should read it. The ending was a let-down and I'm still grumpy about it. I still think it's worth your attention if you're in the mood for a sink-or-swim type of reading experience. Just be warned that when the ride ends, I felt unceremoniously dumped on the pavement. Content warnings: Rape, torture, genocide. Rating: 7 out of 10

12 January 2024

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In December, 18 contributors were paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 7.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 16.0h (out of 26.25h assigned and 8.75h from previous period), thus carrying over 19.0h to the next month.
  • Bastien Roucariès did 16.0h (out of 16.0h assigned and 4.0h from previous period), thus carrying over 4.0h to the next month.
  • Ben Hutchings did 8.0h (out of 7.25h assigned and 16.75h from previous period), thus carrying over 16.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 8.0h (out of 26.75h assigned and 8.25h from previous period), thus carrying over 27.0h to the next month.
  • Guilhem Moulin did 25.0h (out of 18.0h assigned and 7.0h from previous period).
  • Holger Levsen did 5.5h (out of 5.5h assigned).
  • Jochen Sprickerhof did 0.0h (out of 0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Lee Garrett did 0.0h (out of 25.75h assigned and 9.25h from previous period), thus carrying over 35.0h to the next month.
  • Markus Koschany did 35.0h (out of 35.0h assigned).
  • Roberto C. Sánchez did 9.5h (out of 5.5h assigned and 6.5h from previous period), thus carrying over 2.5h to the next month.
  • Santiago Ruano Rincón did 8.255h (out of 3.26h assigned and 12.745h from previous period), thus carrying over 7.75h to the next month.
  • Sean Whitton did 4.25h (out of 3.25h assigned and 6.75h from previous period), thus carrying over 5.75h to the next month.
  • Sylvain Beucler did 16.5h (out of 21.25h assigned and 13.75h from previous period), thus carrying over 18.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 10.25h (out of 12.0h assigned), thus carrying over 1.75h to the next month.
  • Utkarsh Gupta did 18.75h (out of 11.25h assigned and 13.5h from previous period), thus carrying over 6.0h to the next month.

Evolution of the situation In December, we released 29 DLAs. A particularly notable update in December was prepared by LTS contributor Santiago Ruano Rincón for the openssh package. The update produced DLA-3694-1 and included a fix for the Terrapin Attack (CVE-2023-48795), which was a rather serious flaw in the SSH protocol itself. The package bluez was the subject of another notable update by LTS contributor Chris Lamb, which resulted in DLA-3689-1 to address an insecure default configuration which allowed attackers to inject keyboard commands over Bluetooth without first authenticating. The LTS team continues its efforts to have a positive impact beyond the boundaries of LTS. Several contributors worked on packages, preparing LTS updates, but also preparing patches or full updates which were uploaded to the unstable, stable, and oldstable distributions, including: Guilhem Moulin's update of tinyxml (uploads to LTS and unstable and patches submitted to the security team for stable and oldstable); Guilhem Moulin's update of xerces-c (uploads to LTS and unstable and patches submitted to the security team for oldstable); Thorsten Alteholz's update of libde265 (uploads to LTS and stable and additional patches submitted to the maintainer for stable and oldstable); Thorsten Alteholz's update of cjson (upload to LTS and patches submitted to the maintainer for stable and oldstable); and Tobias Frost's update of opendkim (sponsored a maintainer-prepared upload to LTS and additionally prepared updates for stable and oldstable). Going beyond Debian and looking to the broader community, LTS contributor Bastien Roucariès was contacted by SUSE concerning an update he had prepared for zbar. He was able to assist by coordinating with the former organization of the original zbar author to secure for SUSE access to information concerning the exploits. This has enabled another distribution to benefit from the work done in support of LTS and from the assistance of Bastien in coordinating the access to information. Finally, LTS contributor Santiago Ruano Rincón continued work relating to how updates for packages in statically-linked language ecosystems (e.g., Go, Rust, and others) are handled. The work is presently focused on more accurately and reliably identifying which packages are impacted in a given update scenario to enable notifications to be published so that users will be made aware of these situations as they occur. As the work continues, it will eventually result in improvements to Debian infrastructure so that the LTS team and Security team are able to manage updates of this nature in a more consistent way.

Thanks to our sponsors Sponsors that joined recently are in bold.

11 January 2024

Matthias Klumpp: Wayland really breaks things Just for now?

This post is in part a response to an aspect of Nate's post "Does Wayland really break everything?", but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months.

Some facts Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland; this is a fact at this point (and has been for a while), and sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great; it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in the future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense! The shift towards people seeing Linux more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland, is also an amazing development and I love to see this; this holistic approach is something I always wanted! Furthermore, I think Wayland removes a lot of functionality that shouldn't exist in a modern compositor, and that's a good thing too! Some of X11's features and design decisions had clear drawbacks that we shouldn't replicate. I highly recommend reading Nate's blog post; it's very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But! But! Of course there was a but coming. I think that while developing Wayland-as-an-ecosystem we are now entrenched in narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash; people from different desktops with different design philosophies debate the merits of those over and over again, never reaching any conclusion (just as you will never get an answer out of humans on whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop's design philosophies, others prefer the smallest and leanest protocol possible, and other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches. But this also creates problems: by switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but just work that has to be done. It becomes frustrating, though, if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate's Photoshop analogy: of course Linux does not break Photoshop; it is Adobe's responsibility to port it. But what if Linux were missing a crucial syscall that Photoshop needed for proper functionality and Adobe couldn't port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available. A lot of Wayland protocol work is focused on the environment and design, while applications and the work to port them are often considered less. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blog post). In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional piece of software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs only on Linux and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. The same goes for open source projects, especially if you offer to do the Linux porting work for them. But overall, the ease of use and availability of required applications and their usability rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.
KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25years of KDE)3

Back to the point The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: they all do not matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of "easiest": in many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and 3rd-party support though. Here's a dirty secret: in many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with, and will calculate the merits of carefully in advance, is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all. So if they learn that porting to Linux not only means added testing and support, but also means choosing between the legacy X11 display server that allows for 1:1 porting from Windows or the new Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen. Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things, or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt's documentation you will often find texts like "works everywhere except for on Wayland compositors or mobile"4. Many missing bits or altered behaviors are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or people not wanting to use the software on the respective platform.

What s missing?

Window positioning SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly the way they want and expects the layout to be stored and loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps. It is important to note that this is not a legacy design, but in many cases an intentional choice: these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop). Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way. I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol, which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile, though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again. Meanwhile, toolkits can not adopt these protocols and applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all, because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it's not the end of the world if every window has the same icon, but it's one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they would rather run their app through Xwayland for now. I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows, all with the default yellow W icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation Automation of GUI tasks is a powerful feature, and so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we're not fully there yet.
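As a rough illustration of what is already possible (not part of the original post; the exact syntax differs between ydotool versions), keyboard input can be synthesized on Wayland via the uinput-based ydotool, provided its ydotoold daemon is running with suitable permissions:
# Sketch only: drive a GUI test by synthesizing input on Wayland
ydotool type 'hello from an automated test'
ydotool key 28:1 28:0   # press and release Enter (Linux keycode 28)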

Wayland is frustrating for (some) application authors As you can see, there are valid applications and valid use cases that can not yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author's perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result. Wayland does break screen sharing, global hotkeys, gaming latency (via "no tearing") etc., however for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except "use Xwayland and be on emulation as a second-class citizen forever", it just results in very frustrated application developers. For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art! Wayland, via its protocol limitations, forces a certain way to build application UX, often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out: Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, as in the case of the multiwindow UI paradigm, impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren't the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details, even though they may be programmers as well; the real goal is not to fiddle with the display server, but to get to a scientific result somehow. Another issue is portability layers like Wine, which need to run Windows applications as-is on Wayland. Apparently Wine's Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out? So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question: do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications as possible ported over to Wayland, even though we might sacrifice some protocol purity? I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors. If the answer for your environment turns out to be "Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability", then your desktop/compositor should just immediately NACK protocols that add something like this, and you simply shouldn't engage in the discussion, as you reject the very premise of the new protocol: that it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment. If the answer turns out to be "We do want some portability", the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren't. We can't blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise: allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository more easily, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits). You may now say that a lot of apps are already ported, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them. To end on a positive note: when it came to porting concrete apps over to Wayland, the only real showstoppers so far5 were the missing window-positioning and window-position-restore features.
I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily. What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in the future. And the only way to get something good done is through contribution and friendly discussion.

Footnotes
  1. Apologies for the clickbait-y title; it comes with the subject
  2. When I talk about Wayland I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome platform issues pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn't even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which isn't particularly helpful when porting (but is still essential to know).
  5. Besides issues with Nvidia hardware: CUDA for simulations and machine learning is pretty much everywhere, so Nvidia cards are common, which still causes trouble on Wayland. It is improving though.

8 January 2024

Antoine Beaupré: Last year on this blog

So this blog is now celebrating its 21st birthday (or 20 if you count from zero, or 18 if you want to be pedantic), and I figured I would do this yearly thing of reviewing how that went.

Number of posts 2022 was the official 20th anniversary in any case, and that was one of my best years on record, with 46 posts, surpassed only by the noisy 2005 (62) and matching 2006 (46). 2023, in comparison, was underwhelming: a feeble 11 posts! What happened! Well, I was busy with other things, mostly away from keyboard, that I will not bore you with here... The other thing that happened is that the one-liner I used to collect stats was broken (it counted folders and other unrelated files) and wildly overestimated 2022! Turns out I didn't write that much then:
anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9].*.md' | sed s/-.*// | sort | uniq -c | sort -n -k2
     57 2005
     43 2006
     20 2007
     20 2008
      7 2009
     13 2010
     16 2011
     11 2012
     13 2013
      5 2014
     13 2015
     18 2016
     29 2017
     27 2018
     17 2019
     18 2020
     14 2021
     28 2022
     10 2023
      1 2024
But even that is inaccurate because, in ikiwiki, I can tag any page as being featured on the blog. So we actually need to process the HTML itself because we don't have much better on hand without going through ikiwiki's internals:
anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c
     56 2005
     42 2006
     19 2007
     18 2008
      6 2009
     12 2010
     15 2011
     10 2012
     11 2013
      3 2014
     15 2015
     32 2016
     50 2017
     37 2018
     19 2019
     19 2020
     15 2021
     28 2022
     13 2023
Which puts the top 10 years at:
$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | head -10
     56 2005
     50 2017
     42 2006
     37 2018
     32 2016
     28 2022
     19 2020
     19 2019
     19 2007
     18 2008
Anyway. 2023 is certainly not a glorious year in that regard, in any case.

Visitors In terms of visits, however, we had quite a few hits. According to Goatcounter, I had 122 300 visits in 2023! 2022, in comparison, had 89 363, so that's quite a rise.

What you read I seem to have hit the Hacker News front page at least twice. I say "seem" because it's actually pretty hard to tell what the HN front page actually is on any given day. I had 22k visits on 2023-03-13, in any case, and you can't see me on the front page that day. We do see a post of mine on 2023-09-02, all the way down there, which seems to have generated another 10k visits. In any case, here were the most popular stories for you fine visitors:
  • Framework 12th gen laptop review: 24k visits, which is surprising for a 13k words article "without images", as some critics have complained. 15k referred by Hacker News. Good reference and time-consuming benchmarks, slowly bit-rotting. That is, by far, my most popular article ever. A popular article in 2021 or 2022 was around 6k to 9k, so that's a big one. I suspect it will keep getting traffic for a long while.
  • Calibre replacement considerations: 15k visits, most of which without a referrer. Was actually an old article, but I suspect HN brought it back to light. I keep updating that wiki page regularly when I find new things, but I'm still using Calibre to import ebooks.
  • Hacking my Kobo Clara HD: not new, but always gathering more and more hits; it had 1800 hits in the first year, 4600 hits last year and has now brought 6400 visitors to the blog! Not directly related, but this iFixit battery replacement guide I wrote also seems to be quite popular.
Everything else was published before 2023. Replacing Smokeping with Prometheus is still around and Looking at Wayland terminal emulators makes an entry in the top five.

Where you've been People send less and less private information when they browse the web. The number of visitors without referrers was 41% in 2021, it rose to 44% in 2023. Most of the remaining traffic comes from Google, but Hacker News is now a significant chunk, almost as big as Google. In 2021, Google represented 23% of my traffic, in 2022, it was down to 15% so 18% is actually a rise from last year, even if it seems much smaller than what I usually think of.
Ratio Referrer Visits
18% Google 22 098
13% Hacker News 16 003
2% duckduckgo.com 2 640
1% community.frame.work 1 090
1% missing.csail.mit.edu 918
Note that Facebook and Twitter do not appear at all in my referrers.

Where you are Unsurprisingly, most visits still come from the US:
Ratio Country Visits
26% United States 32 010
14% France 17 046
10% Germany 11 650
6% Canada 7 425
5% United Kingdom 6 473
3% Netherlands 3 436
Those ratios are nearly identical to last year, but quite different from 2021, where Germany and France were more or less reversed. Back in 2021, I mentioned there was a long tail of countries with at least one visit, with 160 countries listed. I expanded that and there's now 182 countries in that list, almost all of the 193 member states in the UN.

What you were Chrome's dominance continues to expand, even on readers of this blog, gaining two percentage points from Firefox compared to 2021.
Ratio Browser Visits
49% Firefox 60 126
36% Chrome 44 052
14% Safari 17 463
1% Others N/A
It seems like, unfortunately, my Lynx and Haiku users have not visited in the past year. It seems like trying to read those metrics is like figuring out tea leaves... In terms of operating systems:
Ratio OS Visits
28% Linux 34 010
23% macOS 28 728
21% Windows 26 303
17% Android 20 614
10% iOS 11 741
Again, Linux and Mac are over-represented, and Android and iOS are under-represented.

What is next I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world... So anyway, thanks for coming, faithful reader, and see you in the coming 2024 year...

4 January 2024

Michael Ablassmeier: Migrating a system to Hetzner cloud using REAR and kexec

I needed to migrate an existing system to a Hetzner cloud VPS. While it is possible to attach KVM consoles and custom ISO images to dedicated servers, I didn't find any way to do so with regular cloud instances. For system migrations I usually use REAR, which has never failed me (and has also saved my ass during recovery multiple times). It's an awesome utility! It's possible to do this using the Hetzner recovery console too, but using REAR is very convenient here, because it handles things like re-creating the partition layout and network settings automatically! The steps are outlined in the example below.

Example To create a rescue image on the source system:
apt install rear
echo OUTPUT=ISO > /etc/rear/local.conf
rear mkrescue -v
[..]
Wrote ISO image: /var/lib/rear/output/rear-debian12.iso (185M)
My source system had a 128 GB disk, so I registered an instance on Hetzner cloud with a greater disk size to make things easier: image Now copy the ISO image to the newly created instance and extract its data:
 apt install kexec-tools
 scp rear-debian12.iso root@49.13.193.226:/tmp/
 modprobe loop
 mount -o loop rear-debian12.iso /mnt/
 cp /mnt/isolinux/kernel /tmp/
 cp /mnt/isolinux/initrd.cgz /tmp/
Install kexec if not installed already:
 apt install kexec-tools
Note down the current gateway configuration; this is required later on to make the REAR recovery console reachable via SSH:
root@testme:~# ip route
default via 172.31.1.1 dev eth0
172.31.1.1 dev eth0 scope link
Reboot the running VPS instance into the REAR recovery image using somewhat the same kernel cmdline:
root@testme:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.1.0-13-amd64 root=UUID=5174a81e-5897-47ca-8fe4-9cd19dc678c4 ro consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0
kexec --initrd /tmp/initrd.cgz --command-line="consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0" /tmp/kernel
Connection to 49.13.193.226 closed by remote host.
Connection to 49.13.193.226 closed
Now watch the system on the console booting into the REAR system: image Log in to the recovery console (root, without password) and fix its default route to make it reachable:
ip addr
[..]
2: enp1s0
..
$ ip route add 172.31.1.1 dev enp1s0
$ ip route add default via 172.31.1.1
ping 49.13.193.226
64 bytes from 49.13.193.226: icmp_seq=83 ttl=52 time=27.7 ms
The network configuration might differ; the source system in this example used DHCP, as does the target. If REAR detects a changed static network configuration, it guides you through the setup pretty nicely. Log in via SSH (REAR will store your SSH public keys in the image) and start the recovery process, following the steps as suggested by REAR:
ssh -l root 49.13.193.226
Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
RESCUE debian12:~ # rear recover
Relax-and-Recover 2.7 / Git
Running rear recover (PID 673 date 2024-01-04 19:20:22)
Using log file: /var/log/rear/rear-debian12.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
Comparing disks
Device vda does not exist (manual configuration needed)
Switching to manual disk layout configuration (GiB sizes rounded down to integer)
/dev/vda had size 137438953472 (128 GiB) but it does no longer exist
/dev/sda was not used on the original system and has now 163842097152 (152 GiB)
Original disk /dev/vda does not exist (with same size) in the target system
Using /dev/sda (the only available of the disks) for recreating /dev/vda
Current disk mapping table (source => target):
  /dev/vda => /dev/sda
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
[..]
User confirmed recreated disk layout
[..]
This step re-creates your original disk layout and mounts it to /mnt/local/ (this example uses a pretty lame layout, but usually REAR will handle things like lvm/btrfs just nicely):
mount
/dev/sda3 on /mnt/local type ext4 (rw,relatime,errors=remount-ro)
/dev/sda1 on /mnt/local/boot type ext4 (rw,relatime)
Now clone your source system's data to /mnt/local/ with whatever utility you like to use and exit the recovery step. After confirming everything went well, REAR will set up the bootloader (and all other config details like fstab entries and adjusted network configuration) for you as required:
rear> exit
Did you restore the backup to /mnt/local ? Are you ready to continue recovery ? yes
User confirmed restored files
Updated initramfs with new drivers for this system.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader...
Determining where to install GRUB2 (no GRUB2_INSTALL_DEVICES specified)
Found possible boot disk /dev/sda - installing GRUB2 there
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 7103) and its descendant processes ...
Running exit tasks
Now reboot the recovery console and watch it boot into your target system's configuration: image Being able to use this procedure for complete disaster recovery within a Hetzner cloud VPS (using off-site backups) gives me a better feeling, too.

2 January 2024

Matthew Garrett: Dealing with weird ELF libraries

Libraries are collections of code that are intended to be usable by multiple consumers (if you're interested in the etymology, watch this video). In the old days we had what we now refer to as "static" libraries, collections of code that existed on disk but which would be copied into newly compiled binaries. We've moved beyond that, thankfully, and now make use of what we call "dynamic" or "shared" libraries - instead of the code being copied into the binary, a reference to the library function is incorporated, and at runtime the code is mapped from the on-disk copy of the shared object[1]. This allows libraries to be upgraded without needing to modify the binaries using them, and if multiple applications are using the same library at once it only requires that one copy of the code be kept in RAM.

But for this to work, two things are necessary: when we build a binary, there has to be a way to reference the relevant library functions in the binary; and when we run a binary, the library code needs to be mapped into the process.

(I'm going to somewhat simplify the explanations from here on - things like symbol versioning make this a bit more complicated but aren't strictly relevant to what I was working on here)

For the first of these, the goal is to replace a call to a function (eg, printf()) with a reference to the actual implementation. This is the job of the linker rather than the compiler (eg, if you use the -c argument to tell gcc to simply compile to an object rather than linking an executable, it's not going to care about whether or not every function called in your code actually exists or not - that'll be figured out when you link all the objects together), and the linker needs to know which symbols (which aren't just functions - libraries can export variables or structures and so on) are available in which libraries. You give the linker a list of libraries, it extracts the symbols available, and resolves the references in your code with references to the library.

But how is that information extracted? Each ELF object has a fixed-size header that contains references to various things, including a reference to a list of "section headers". Each section has a name and a type, but the ones we're interested in are .dynstr and .dynsym. .dynstr contains a list of strings, representing the name of each exported symbol. .dynsym is where things get more interesting - it's a list of structs that contain information about each symbol. This includes a bunch of fairly complicated stuff that you need to care about if you're actually writing a linker, but the relevant entries for this discussion are an index into .dynstr (which means the .dynsym entry isn't sufficient to know the name of a symbol, you need to extract that from .dynstr), along with the location of that symbol within the library. The linker can parse this information and obtain a list of symbol names and addresses, and can now replace the call to printf() with a reference to libc instead.
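As a concrete illustration (not from the original post; the libc path below assumes a Debian-style amd64 system and may differ elsewhere), these exported symbol tables can be inspected with readelf and nm:
# List the dynamic symbol table (.dynsym entries resolved against .dynstr)
readelf --dyn-syms /lib/x86_64-linux-gnu/libc.so.6 | head
# The same exported symbols as nm sees them
nm -D --defined-only /lib/x86_64-linux-gnu/libc.so.6 | grep ' printf$'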

(Note that it's not possible to simply encode this as "Call this address in this library" - if the library is rebuilt or is a different version, the function could move to a different location)

Experimentally, .dynstr and .dynsym appear to be sufficient for linking a dynamic library at build time - there are other sections related to dynamic linking, but you can link against a library that's missing them. Runtime is where things get more complicated.

When you run a binary that makes use of dynamic libraries, the code from those libraries needs to be mapped into the resulting process. This is the job of the runtime dynamic linker, or RTLD[2]. The RTLD needs to open every library the process requires, map the relevant code into the process's address space, and then rewrite the references in the binary into calls to the library code. This requires more information than is present in .dynstr and .dynsym - at the very least, it needs to know the list of required libraries.

There's a separate section called .dynamic that contains another list of structures, and it's the data here that's used for this purpose. For example, .dynamic contains a bunch of entries of type DT_NEEDED - this is the list of libraries that an executable requires. There's also a bunch of other stuff that's required to actually make all of this work, but the only thing I'm going to touch on is DT_HASH. Doing all this re-linking at runtime involves resolving the locations of a large number of symbols, and if the only way you can do that is by reading a list from .dynsym and then looking up every name in .dynstr that's going to take some time. The DT_HASH entry points to a hash table - the RTLD hashes the symbol name it's trying to resolve, looks it up in that hash table, and gets the symbol entry directly (it still needs to resolve that against .dynstr to make sure it hasn't hit a hash collision - if it has it needs to look up the next hash entry, but this is still generally faster than walking the entire .dynsym list to find the relevant symbol). There's also DT_GNU_HASH which fulfills the same purpose as DT_HASH but uses a more complicated algorithm that performs even better. .dynamic also contains entries pointing at .dynstr and .dynsym, which seems redundant but will become relevant shortly.
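For a rough idea of what this looks like in practice (illustrative only; /bin/ls is just a convenient example binary), the relevant .dynamic entries can be dumped with readelf:
# Show required libraries plus the hash, string-table and symbol-table pointers in .dynamic
readelf -d /bin/ls | grep -E 'NEEDED|HASH|STRTAB|SYMTAB'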

So, .dynsym and .dynstr are required at build time, and both are required along with .dynamic at runtime. This seems simple enough, but obviously there's a twist and I'm sorry it's taken so long to get to this point.

I bought a Synology NAS for home backup purposes (my previous solution was a single external USB drive plugged into a small server, which had uncomfortable single point of failure properties). Obviously I decided to poke around at it, and I found something odd - all the libraries Synology ships were entirely lacking any ELF section headers. This meant no .dynstr, .dynsym or .dynamic sections, so how was any of this working? nm asserted that the libraries exported no symbols, and readelf agreed. If I wrote a small app that called a function in one of the libraries and built it, gcc complained that the function was undefined. But executables on the device were clearly resolving the symbols at runtime, and if I loaded them into ghidra the exported functions were visible. If I dlopen()ed them, dlsym() couldn't resolve the symbols - but if I hardcoded the offset into my code, I could call them directly.

Things finally made sense when I discovered that if I passed the --use-dynamic argument to readelf, I did get a list of exported symbols. It turns out that ELF is weirder than I realised. As well as the aforementioned section headers, ELF objects also include a set of program headers. One of the program header types is PT_DYNAMIC. This typically points to the same data that's present in the .dynamic section. Remember when I mentioned that .dynamic contained references to .dynsym and .dynstr? This means that simply pointing at .dynamic is sufficient, there's no need to have separate entries for them.
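A hedged sketch of how that difference shows up in readelf (libweird.so is a stand-in name for one of the section-header-less libraries, not a real file):
readelf --section-headers libweird.so                       # no sections reported
readelf --program-headers libweird.so | grep -A1 DYNAMIC    # but PT_DYNAMIC is present
readelf --use-dynamic --dyn-syms libweird.so                # and symbols are recoverable from it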

The same information can be reached from two different locations. The information in the section headers is used at build time, and the information in the program headers at run time[3]. I do not have an explanation for this. But if the information is present in two places, it seems obvious that it should be possible to reconstruct the missing section headers in my weird libraries. So that's what this does. It extracts information from the DYNAMIC entry in the program headers and creates equivalent section headers.

There's one thing that makes this more difficult than it might seem. The section header for .dynsym has to contain the number of symbols present in the section. And that information doesn't directly exist in DYNAMIC - to figure out how many symbols exist, you're expected to walk the hash tables and keep track of the largest number you've seen. Since every symbol has to be referenced in the hash table, once you've hit every entry the largest number is the number of exported symbols. This seemed annoying to implement, so instead I cheated, added code to simply pass in the number of symbols on the command line, and then just parsed the output of readelf against the original binaries to extract that information and pass it to my tool.
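A minimal sketch of that cheat, with made-up names (the library path and the rebuilding tool are hypothetical; the count is derived from readelf's dynamic-segment symbol listing on the original library):
# Crude count of symbol entries by their binding column, then feed it to a hypothetical tool
nsyms=$(readelf --use-dynamic --dyn-syms libweird.so | grep -cE ' (GLOBAL|WEAK|LOCAL) ')
./rebuild-sections --num-symbols "$nsyms" libweird.so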

Somehow, this worked. I now have a bunch of library files that I can link into my own binaries to make it easier to figure out how various things on the Synology work. Now, could someone explain (a) why this information is present in two locations, and (b) why the build-time linker and run-time linker disagree on the canonical source of truth?

[1] "Shared object" is the source of the .so filename extension used in various Unix-style operating systems
[2] You'll note that "RTLD" is not an acronym for "runtime dynamic linker", because reasons
[3] For environments using the GNU RTLD, at least - I have no idea whether this is the case in all ELF environments


1 January 2024

Tim Retout: Prevent DOM-XSS with Trusted Types - a smarter DevSecOps approach

It can be incredibly easy for a frontend developer to accidentally write a client-side cross-site-scripting (DOM-XSS) security issue, and yet these are hard for security teams to detect. Vulnerability scanners are slow, and suffer from false positives. Can smarter collaboration between development, operations and security teams provide a way to eliminate these problems altogether? Google claims that Trusted Types has all but eliminated DOM-XSS exploits on those of their sites which have implemented it. Let's find out how this can work!

DOM-XSS vulnerabilities are easy to write, but hard for security teams to catch It is very easy to accidentally introduce a client-side XSS problem. As an example of what not to do, suppose you are setting an element's text to the current URL, on the client side:
// Don't do this
para.innerHTML = location.href;
Unfortunately, an attacker can now manipulate the URL (and e.g. send this link in a phishing email), and any HTML tags they add will be interpreted by the user's browser. This could potentially be used by the attacker to send private data to a different server. Detecting DOM-XSS using vulnerability scanning tools is challenging - typically this requires crawling each page of the website and attempting to detect problems such as the one above, but there is a significant risk of false positives, especially as the complexity of the logic increases. There are already ways to avoid these exploits: developers should validate untrusted input before making use of it. There are libraries such as DOMPurify which can help with sanitization.1 However, if you are part of a security team with responsibility for preventing these issues, it can be complex to understand whether you are at risk. Different developer teams may be using different techniques and tools. It may be impossible for you to work closely with every developer, so how can you know that the frontend team have used these libraries correctly?

Trusted Types closes the DevSecOps feedback loop for DOM-XSS, by allowing Ops and Security to verify good Developer practices Trusted Types enforces sanitization in the browser2, by requiring the web developer to assign a particular kind of JavaScript object rather than a native string to .innerHTML and other dangerous properties. Provided these special types are created in an appropriate way, then they can be trusted not to expose XSS problems. This approach will work with whichever tools the frontend developers have chosen to use, and detection of issues can be rolled out by infrastructure engineers without requiring frontend code changes.

Content Security Policy allows enforcement of security policies in the browser itself Because enforcing this safer approach in the browser for all websites would break backwards-compatibility, each website must opt in through Content Security Policy headers. Content Security Policy (CSP) is a mechanism that allows web pages to restrict what actions a browser should execute on their page, and a way for the site to receive reports if the policy is violated. Diagram showing a browser communicating with a web server: Content-Security-Policy headers are returned by the URL "/", and the browser reports any security violations to "/csp". Figure 1: Content-Security-Policy browser communication. This is revolutionary, because it allows servers to receive feedback in real time on errors that may be appearing in the browser's console.
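As a quick illustration (not from the original article; example.com is a placeholder and the /csp endpoint simply mirrors Figure 1), you can check what policy a site currently sends, and a report-only policy that requires Trusted Types could look like the commented header below:
# Inspect the CSP headers a site currently sends
curl -sI https://example.com/ | grep -i '^content-security-policy'
# A report-only policy requiring Trusted Types for script sinks, reporting violations to /csp:
#   Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; report-uri /csp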

Trusted Types can be rolled out incrementally, with continuous feedback Web.dev's article on Trusted Types explains how to safely roll out the feature using the features of CSP itself:
  • Deploy a CSP collector if you haven't already
  • Switch on CSP reports without enforcement (via Content-Security-Policy-Report-Only headers)
  • Iteratively review and fix the violations
  • Switch to enforcing mode when there is a low enough rate of reports
Static analysis in a continuous integration pipeline is also sensible: you want to prevent regressions shipping in new releases before they trigger a flood of CSP reports. This will also give you a chance of finding any low-traffic vulnerable pages.
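A dedicated linter rule is the better tool for this, but as a rough sketch of the idea (the src/ path and file globs are placeholders), even a crude grep step in CI can flag direct assignments to dangerous sinks:
# Very crude CI heuristic: fail the build if code assigns directly to innerHTML/outerHTML
if grep -rnE '\.(innerHTML|outerHTML)[[:space:]]*=' src/ --include='*.js' --include='*.ts'; then
  echo "Potential DOM-XSS sink found; route it through a Trusted Types policy" >&2
  exit 1
fi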

Smart security teams will use techniques like Trusted Types to eliminate entire classes of bugs at a time Rather than playing whack-a-mole with unreliable vulnerability scanning or bug bounties, techniques such as Trusted Types are truly in the spirit of Secure by Design: build high quality in from the start of the engineering process, and do this in a way which closes the DevSecOps feedback loop between your Developer, Operations and Security teams.

  1. Sanitization libraries are especially needed when the examples become more complex, e.g. if the application must manipulate the input. DOMPurify version 1.0.9 also added Trusted Types support, so can still be used to help developers adopt this feature.
  2. Trusted Types has existed in Chrome and Edge since 2020, and should soon be coming to Firefox as well. However, it's not necessary to wait for Firefox or Safari to add support, because the large market share of Chrome and Edge will let you identify and fix your site's DOM-XSS issues, even if you do not set enforcing mode, and users of all browsers will benefit. Even so, it is great that Mozilla is now on board.
